Contracting Strategy Based on Markov Process Modeling

Authors

  • Sunju Park
  • Edmund H. Durfee
Abstract

One of the fundamental activities in multiagent systems is the exchange of tasks among agents (Davis & Smith 1983). In particular, we are interested in contracts among self-interested agents (Sandholm & Lesser 1995), where a contractor wants to find a contractee that will perform the task for the lowest payment, and a contractee wants to perform tasks that maximize its profit (the payment received less the cost of doing the task). Multiple, concurrent contracts take place, so a contract may be retracted because of other contracts. In our work, we are asking the question: What payment should a contractor offer to maximize its expected utility? If the contractor knows the costs of the agents and knows that the agent(s) with the minimum cost are available, then it can offer to pay some small amount above that cost. But the contractor will usually face uncertainty: it might have only probabilistic information about the costs of other agents for a task, and also about their current and future availability. A risk-averse contractor therefore needs to offer a payment that is not only likely to be acceptable to some contractee, but also sufficiently high that the contractee will be unlikely to retract on the deal as other tasks are announced by other contractors. A risk-taking contractor, on the other hand, may want to pay a little less and risk non-acceptance or eventual retraction.

This abstract defines the contractor's decision problem and presents a contracting strategy by which the contractor can determine an optimal payment to offer. The contractor's decision problem in the contracting process is to find a payment that maximizes its expected utility. The contractor's utility for a payment p is defined as P_S × U(Payoff_S(p)) + P_F × U(Payoff_F(p)), where U(·) is the utility function, P_S and P_F denote the probabilities of success (S) and failure (F) of accomplishing a contract, and Payoff_S and Payoff_F are the payoffs of S and F, respectively, given p.

We have developed a four-step contracting strategy for the contractor to compute P_S, P_F, Payoff_S, and Payoff_F, and thus to find the best payment to offer. First, the contractor models the future contracting process stochastically as a Markov Process (MP). An example MP model is shown in Figure 1-(a). State I is the initial state, and state A is the announced state. State C is the contracted state, where the contractor has awarded the task to one of those who accepted its offer. States S and F are the success and failure states, respectively. From A, the
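To make the expected-utility calculation concrete, the Python sketch below grid-searches candidate payments for the offer that maximizes P_S × U(Payoff_S(p)) + P_F × U(Payoff_F(p)). The utility family, the logistic acceptance/retraction model in prob_success(), and all numbers are illustrative assumptions, not taken from the paper; in the paper, P_S and the payoffs are instead derived from the Markov Process model of the contracting process (states I, A, C, S, F).

import math

def utility(x, risk_aversion=0.0):
    # risk_aversion = 0 gives a risk-neutral agent; larger values make the
    # agent risk-averse (concave utility), so low payoffs are penalized more.
    if risk_aversion == 0:
        return x
    return (1.0 - math.exp(-risk_aversion * x)) / risk_aversion

def prob_success(p, min_cost=10.0, spread=5.0):
    # Hypothetical stand-in for the MP analysis: higher payments are more
    # likely to be accepted and less likely to be retracted when other
    # contractors announce competing tasks.
    return 1.0 / (1.0 + math.exp(-(p - min_cost) / spread))

def expected_utility(p, task_value=30.0, failure_penalty=5.0, risk_aversion=0.1):
    p_s = prob_success(p)
    payoff_s = task_value - p      # Payoff_S(p): task done for payment p
    payoff_f = -failure_penalty    # Payoff_F(p): contract not completed
    return (p_s * utility(payoff_s, risk_aversion)
            + (1.0 - p_s) * utility(payoff_f, risk_aversion))

# Grid-search the candidate payments for the utility-maximizing offer.
candidates = [p / 10.0 for p in range(0, 301)]   # 0.0, 0.1, ..., 30.0
best_p = max(candidates, key=expected_utility)
print(best_p, expected_utility(best_p))

Under these assumptions, a more risk-averse agent (larger risk_aversion) pushes the optimal offer upward, trading a smaller payoff on success for a higher chance of avoiding retraction, which mirrors the risk-averse versus risk-taking contrast described above.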


Similar resources

An Optimal Contracting Strategy in a Digital Library

Agents can benefit from contracting out tasks that they cannot perform themselves or that other agents can perform more efficiently. Developing an agent's contracting strategy in the University of Michigan Digital Library (UMDL), however, is not easy for the following reasons. The UMDL consists of self-interested agents who will perform another agent's task only when d...


Availability analysis of mechanical systems with condition-based maintenance using semi-Markov and evaluation of optimal condition monitoring interval

Maintenance helps to extend equipment life by improving its condition and avoiding catastrophic failures. An appropriate model or mechanism is thus needed to quantify system availability vis-a-vis a given maintenance strategy, which will assist in decision-making for optimal utilization of maintenance resources. This paper deals with semi-Markov process (SMP) modeling for steady state availabili...


Financial Risk Modeling with Markov Chain

Investors use different approaches to select an optimal portfolio, so optimal investment choices based on return can be interpreted through different models. The traditional approach to portfolio selection is the mean-variance model. Another approach is the Markov chain. A Markov chain is a random process without memory, which means that the conditional probability distribution of the nex...
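As a concrete illustration of the memoryless property mentioned in this summary, the following Python sketch propagates a state distribution through a hypothetical 3-state transition matrix; the regime labels and probabilities are invented for illustration and are not taken from the article. The distribution over the next state depends only on the current state (the corresponding row of P), not on the path taken to reach it.

import numpy as np

# Hypothetical 3-state Markov chain over market regimes.
P = np.array([[0.7, 0.2, 0.1],    # transitions from "bull"
              [0.3, 0.4, 0.3],    # transitions from "neutral"
              [0.1, 0.3, 0.6]])   # transitions from "bear"

state = np.array([1.0, 0.0, 0.0])  # start in "bull"
for step in range(1, 4):
    state = state @ P              # propagate the distribution one step
    print(step, state)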


Stochastic Contracting Strategy with Deliberation Overhead

In multiagent systems consisting of self-interested agents, forming a contract often requires complex, strategic thinking (Rosenschein & Zlotkin 1994, Vidal & Durfee 1996). In this abstract, we describe a stochastic contracting strategy for a utility-maximizing agent and discuss the impact of deliberation overhead on its performance. In a contracting situation, an agent often faces many factor...


Effects of user modeling on POMDP-based dialogue systems

Partially observable Markov decision processes (POMDPs) have gained significant interest in research on spoken dialogue systems due, among many benefits, to their ability to naturally model the dialogue strategy selection problem under the unreliability of automated speech recognition. However, the POMDP approaches are essentially model-based, and as a result, the dialogue strategy computed from P...



Journal title:

Volume   Issue

Pages  -

Publication date: 1996